
    Markov chains and optimality of the Hamiltonian cycle

    We consider the Hamiltonian cycle problem (HCP) embedded in a controlled Markov decision process. In this setting, HCP reduces to an optimization problem over a set of Markov chains corresponding to a given graph. We prove that Hamiltonian cycles are minimizers of the trace of the fundamental matrix over the set of all stochastic transition matrices. In the case of doubly stochastic matrices with a symmetric linear perturbation, we show that Hamiltonian cycles minimize a diagonal element of the fundamental matrix for all admissible values of the perturbation parameter. In contrast to previous work on this topic, our arguments are primarily probabilistic rather than algebraic.
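
    As a back-of-the-envelope illustration of the quantity being minimized (our sketch, not the paper's): using the standard fundamental matrix Z = (I - P + Pi)^{-1}, where Pi is the stationary projector, the deterministic cycle attains trace (n + 1)/2, and a doubly stochastic perturbation of it yields a strictly larger trace. The matrices and the choice of perturbation below are illustrative assumptions.

```python
# Illustrative check (not from the paper): a Hamiltonian cycle vs a
# doubly stochastic competitor, comparing the trace of the
# fundamental matrix Z = (I - P + Pi)^{-1}.
import numpy as np

def fundamental_trace(P):
    """Trace of Z = (I - P + Pi)^{-1}.

    Assumes P is doubly stochastic and irreducible, so the stationary
    distribution is uniform and Pi = J/n.
    """
    n = P.shape[0]
    Pi = np.full((n, n), 1.0 / n)
    return np.trace(np.linalg.inv(np.eye(n) - P + Pi))

n = 6
# Deterministic Hamiltonian cycle 0 -> 1 -> ... -> n-1 -> 0.
H = np.roll(np.eye(n), 1, axis=1)

# Doubly stochastic perturbation: mix the cycle with its reverse
# (this is our illustrative choice of perturbation).
eps = 0.3
Q = (1 - eps) * H + eps * H.T

print(fundamental_trace(H))  # cycle: (n + 1) / 2 = 3.5
print(fundamental_trace(Q))  # perturbed chain: ~5.47, strictly larger
```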

    The Impact of Litigation on Venture Capitalist Reputation

    Venture capital contracts give VCs enormous power over entrepreneurs and early equity investors of portfolio companies. A large literature examines how these contractual terms protect VCs against misbehavior by entrepreneurs. But what constrains misbehavior by VCs? We provide the first systematic analysis of legal and non-legal mechanisms that penalize VC misbehavior, even when such misbehavior is formally permitted by contract. We hand-collect a sample of over 177 lawsuits involving venture capitalists. The three most common types of VC-related litigation are: 1) lawsuits filed by entrepreneurs, which most often allege freezeout and transfer of control away from founders; 2) lawsuits filed by early equity investors in startup companies; and 3) lawsuits filed by VCs. Our empirical analysis of the lawsuit data proceeds in two steps. First, we estimate an empirical model of the propensity of VCs to become involved in litigation as a function of VC characteristics. We match each venture firm that was involved in litigation to an otherwise similar venture firm that was not, and find that less reputable VCs are more likely to participate in litigation, as are VCs focusing on early-stage investments and VCs with larger deal flow. Second, we analyze the relationship between different types of lawsuits and VC fundraising and deal flow. Although plaintiffs lose most VC-related lawsuits, litigation does not go unnoticed: in subsequent years, the involved VCs raise significantly less capital than their peers and invest in fewer deals. The biggest losers are VCs who were defendants in a lawsuit, and especially VCs who were alleged to have expropriated founders.
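
    The matched-sample step can be sketched as a nearest-neighbour match on observable VC characteristics. The covariates (fund_size, deal_flow, early_stage_share), the standardised Euclidean distance, and the synthetic data below are our illustrative assumptions, not the authors' specification.

```python
# Illustrative nearest-neighbour matching of litigated VCs to
# non-litigated controls on observable characteristics. Column names
# and distance metric are hypothetical stand-ins for the paper's design.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
vcs = pd.DataFrame({
    "litigated": rng.integers(0, 2, 200).astype(bool),
    "fund_size": rng.lognormal(5, 1, 200),        # $m under management
    "deal_flow": rng.poisson(12, 200),            # deals per year
    "early_stage_share": rng.uniform(0, 1, 200),  # share of early-stage deals
})

covars = ["fund_size", "deal_flow", "early_stage_share"]
z = (vcs[covars] - vcs[covars].mean()) / vcs[covars].std()  # standardise

treated = vcs[vcs.litigated]
controls = vcs[~vcs.litigated]

# For each litigated VC, pick the closest non-litigated VC in covariate space.
matches = {}
for i in treated.index:
    d = ((z.loc[controls.index] - z.loc[i]) ** 2).sum(axis=1)
    matches[i] = d.idxmin()

print(f"matched {len(matches)} litigated VCs to controls")
```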

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
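
    Under Gaussian (Laplace) assumptions, BMR scores any reduced model analytically from the full model's prior and posterior, with no refitting. The sketch below implements that generic identity for the log Bayes factor; it is a minimal illustration, not SPM's spm_log_evidence routine, and the example numbers are arbitrary.

```python
# Bayesian model reduction under Gaussian assumptions: score a reduced
# model (a connection switched off by a precise zero prior) directly from
# the full model's posterior, without refitting the data.
import numpy as np

def log_bayes_factor(mu, S, mu0, S0, mu0_r, S0_r):
    """ln p(y|reduced) - ln p(y|full) under the Laplace assumption.

    (mu, S): full posterior; (mu0, S0): full prior;
    (mu0_r, S0_r): reduced prior.
    """
    P, P0, P0_r = map(np.linalg.inv, (S, S0, S0_r))
    P_r = P - P0 + P0_r                        # reduced posterior precision
    mu_r = np.linalg.solve(P_r, P @ mu - P0 @ mu0 + P0_r @ mu0_r)
    logdet = lambda A: np.linalg.slogdet(A)[1]
    quad = (mu @ P @ mu + mu0_r @ P0_r @ mu0_r
            - mu0 @ P0 @ mu0 - mu_r @ P_r @ mu_r)
    return 0.5 * (logdet(P) + logdet(P0_r)
                  - logdet(P0) - logdet(P_r)) - 0.5 * quad

# Hypothetical full model: two coupling parameters, vague zero-mean priors.
mu0, S0 = np.zeros(2), np.eye(2)
mu, S = np.array([0.8, 0.02]), 0.05 * np.eye(2)  # full posterior (made up)

# Reduced model: prune parameter 2 with a very tight prior around zero.
S0_r = np.diag([1.0, 1e-8])
print(log_bayes_factor(mu, S, mu0, S0, np.zeros(2), S0_r))  # > 0: prune it
```

    A positive log Bayes factor favours the reduced model; because only matrix algebra is involved, large spaces of reduced architectures can be scored in seconds, which is what makes BMR practical for structure learning.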

    Investigating cortico-striatal beta oscillations in Parkinson's disease cognitive decline

    This scientific commentary refers to ‘Corticostriatal beta oscillation changes associated with cognitive function in Parkinson’s disease’ by Paulo et al. (https://doi.org/10.1093/brain/awad206)

    A tutorial on group effective connectivity analysis, part 2: second level analysis with PEB

    This tutorial provides a worked example of using Dynamic Causal Modelling (DCM) and Parametric Empirical Bayes (PEB) to characterise inter-subject variability in neural circuitry (effective connectivity). This involves specifying a hierarchical model with two or more levels. At the first level, state space models (DCMs) are used to infer the effective connectivity that best explains a subject's neuroimaging timeseries (e.g. fMRI, MEG, EEG). Subject-specific connectivity parameters are then taken to the group level, where they are modelled using a General Linear Model (GLM) that partitions between-subject variability into designed effects and additive random effects. The ensuing (Bayesian) hierarchical model conveys both the estimated connection strengths and their uncertainty (i.e., posterior covariance) from the subject to the group level, enabling hypotheses to be tested about the commonalities and differences across subjects. This approach can also finesse parameter estimation at the subject level, by using the group-level parameters as empirical priors. We walk through this approach in detail, using data from a published fMRI experiment that characterised individual differences in hemispheric lateralization in a semantic processing task. The preliminary subject-specific DCM analysis is covered in detail in a companion paper. This tutorial is accompanied by the example dataset and step-by-step instructions to reproduce the analyses.
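
    A minimal sketch of the second level, assuming one connectivity parameter per subject: the subject-level posterior means enter a GLM whose design matrix encodes commonalities (a constant column) and a group difference, with subjects weighted by their first-level posterior precision. This deliberately simplifies the full PEB scheme, which also iterates between levels and feeds the group estimates back as empirical priors.

```python
# Simplified PEB-style second level: a precision-weighted GLM on
# subject-specific DCM connection strengths. An illustration of the
# hierarchy, not the SPM PEB code.
import numpy as np

rng = np.random.default_rng(1)
N = 16
group = np.repeat([0, 1], N // 2)        # e.g. patients vs controls

# Hypothetical first-level output: posterior mean and variance of one
# connection per subject (in a real analysis these come from the DCMs).
true_beta = np.array([0.5, 0.3])         # group mean, group difference
theta_var = rng.uniform(0.01, 0.05, N)   # subject-level uncertainty
theta = true_beta[0] + true_beta[1] * group + rng.normal(0, np.sqrt(theta_var))

# Second-level design matrix: commonalities + group difference.
X = np.column_stack([np.ones(N), group])

# Precision-weighted least squares (weights = 1 / posterior variance).
W = np.diag(1.0 / theta_var)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ theta)
print("group mean, group difference:", beta)
```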

    Neurovascular coupling: insights from multi-modal dynamic causal modelling of fMRI and MEG

    This technical note presents a framework for investigating the underlying mechanisms of neurovascular coupling in the human brain using multi-modal magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) data. This amounts to estimating the evidence for several biologically informed models of neurovascular coupling using variational Bayesian methods and selecting the most plausible explanation using Bayesian model comparison. First, fMRI data is used to localise active neuronal sources. The coordinates of the neuronal sources are then used as priors in the specification of a dynamic causal model (DCM) for MEG, in order to estimate the underlying generators of the electrophysiological responses. The ensuing estimates of neuronal parameters are used to generate neuronal drive functions, which model the pre- or post-synaptic responses to each experimental condition in the fMRI paradigm. These functions form the input to a model of neurovascular coupling, the parameters of which are estimated from the fMRI data. This establishes a Bayesian fusion technique that characterises the BOLD response, asking, for example, whether instantaneous or delayed pre- or post-synaptic signals mediate haemodynamic responses. Bayesian model comparison is used to identify the most plausible hypotheses about the causes of the multimodal data. We illustrate this procedure by comparing a set of models of a single-subject auditory fMRI and MEG dataset. Our exemplar analysis suggests that the origin of the BOLD signal is mediated instantaneously by intrinsic neuronal dynamics and that neurovascular coupling mechanisms are region-specific. The code and example dataset associated with this technical note are available through the statistical parametric mapping (SPM) software package.
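
    The final stage, mapping a neuronal drive function to a BOLD prediction, can be caricatured as convolving candidate drives with a haemodynamic kernel and comparing their fit. The double-gamma kernel, the drive shapes, and the comparison by residual error below are stand-in assumptions for the biophysical model and variational Bayesian comparison used in the note.

```python
# Caricature of the fusion step: which neuronal drive (instantaneous vs
# delayed) best explains the BOLD response once passed through a
# haemodynamic kernel? Real DCM uses a biophysical model and variational
# Bayes; here we use a canonical double-gamma HRF and residual error.
import numpy as np
from scipy.stats import gamma

dt, T = 0.1, 40.0
t = np.arange(0, T, dt)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)  # double-gamma HRF (a.u.)

drive_instant = (t < 2.0).astype(float)                # instantaneous drive
drive_delayed = ((t > 1.0) & (t < 3.0)).astype(float)  # delayed drive

def bold(drive):
    """Predicted BOLD: neuronal drive convolved with the HRF."""
    return np.convolve(drive, hrf)[: len(t)] * dt

# Synthetic "observed" BOLD generated from the instantaneous drive.
y = bold(drive_instant) + np.random.default_rng(2).normal(0, 0.01, len(t))

for name, d in [("instantaneous", drive_instant), ("delayed", drive_delayed)]:
    sse = np.sum((y - bold(d)) ** 2)
    print(f"{name}: SSE = {sse:.3f}")  # lower SSE -> more plausible drive
```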

    Breaking the Circularity in Circular Analyses: Simulations and Formal Treatment of the Flattened Average Approach

    There has been considerable debate and concern as to whether there is a replication crisis in the scientific literature. A likely cause of poor replication is the multiple comparisons problem. An important way in which this problem can manifest in the M/EEG context is through post hoc tailoring of analysis windows (a.k.a. regions of interest, ROIs) to landmarks in the collected data. Post hoc tailoring of ROIs is used because it allows researchers to adapt to inter-experiment variability and discover novel differences that fall outside of windows defined by prior precedent, thereby reducing Type II errors. However, this approach can dramatically inflate Type I error rates. One way to avoid this problem is to tailor windows according to a contrast that is orthogonal (strictly parametrically orthogonal) to the contrast being tested. A key approach of this kind is to identify windows on a fully flattened average. On the basis of simulations, this approach has been argued to be safe for post hoc tailoring of analysis windows under many conditions. Here, we present further simulations and mathematical proofs to show exactly why the Fully Flattened Average approach is unbiased, providing a formal grounding for the approach, clarifying the limits of its applicability and resolving published misconceptions about the method. We also provide a statistical power analysis, which shows that, in specific contexts, the Fully Flattened Average approach provides higher statistical power than FieldTrip cluster inference. This suggests that the Fully Flattened Average approach will enable researchers to identify more effects from their data without incurring an inflation of the false-positive rate.
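
    A minimal simulation of the underlying logic, under assumed iid Gaussian null data: because the selection contrast (A + B) is orthogonal to the test contrast (A - B), choosing the window post hoc from the flattened average should leave the false-positive rate near the nominal level (the printed rate should sit near 0.05). The window size and thresholds below are arbitrary choices, simplified relative to the paper's simulations.

```python
# Toy simulation of window selection on a flattened (condition-collapsed)
# average. Selection uses A + B; the test uses A - B; under the null these
# are independent, so post hoc selection should not inflate Type I error.
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_trials, n_time = 2000, 30, 100
false_positives = 0

for _ in range(n_sims):
    # Null data: both conditions share the same underlying (flat) signal.
    A = rng.normal(0, 1, (n_trials, n_time))
    B = rng.normal(0, 1, (n_trials, n_time))

    flat = (A + B).mean(axis=0)              # flattened average (selection)
    peak = np.argmax(np.abs(flat))           # post hoc window centre
    win = slice(max(0, peak - 5), peak + 5)  # 10-sample window

    # t-test on trialwise differences within the selected window
    # (valid under the null even though trials are paired arbitrarily).
    d = A[:, win].mean(axis=1) - B[:, win].mean(axis=1)
    tstat = d.mean() / (d.std(ddof=1) / np.sqrt(n_trials))
    false_positives += abs(tstat) > 2.045    # two-tailed crit, df = 29

print("empirical false-positive rate:", false_positives / n_sims)
```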